Adversarial attack defense model with residual dense block self-attention mechanism and generative adversarial network
Yuming ZHAO, Shenkai GU
Journal of Computer Applications    2022, 42 (3): 921-929.   DOI: 10.11772/j.issn.1001-9081.2021030431

Neural networks perform well on image classification tasks, but they are vulnerable to adversarial examples crafted by adding small perturbations, which cause them to output incorrect classification results. Existing defense methods suffer from insufficient image feature extraction and pay too little attention to the features of key image regions. To address these issues, a defense model fusing the Residual Dense Block (RDB) self-attention mechanism and Generative Adversarial Network (GAN), namely RD-SA-DefGAN, was proposed. A GAN was combined with the Projected Gradient Descent (PGD) attack algorithm: adversarial samples generated by the PGD attack algorithm were added to the training sample set, and the training process of the model was stabilized by conditional constraints. The model also introduced RDB and the self-attention mechanism to fully extract image features and enhance the contribution of features from key image regions. Experimental results on the CIFAR10, STL10, and ImageNet20 datasets show that RD-SA-DefGAN defends effectively against adversarial attacks and outperforms the Adv.Training, Adv-BNN, and Rob-GAN methods in defending against PGD adversarial attacks. Compared with the most similar algorithm, Rob-GAN, RD-SA-DefGAN improved the defense success rate by 5.0 to 9.1 percentage points on affected images in the CIFAR10 dataset, with the perturbation threshold ranging from 0.015 to 0.070.
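The PGD attack used to generate the training samples above can be sketched in a few lines: iteratively step along the sign of the loss gradient, then project the perturbation back into an L-infinity ball of radius ε around the clean input. The snippet below is a minimal illustrative sketch using a toy logistic-regression "model" in numpy; the function name, the model, and the default parameters are assumptions for illustration, not the paper's actual setup (which attacks a deep classifier).

```python
import numpy as np

def pgd_attack(x, y, w, eps=0.07, alpha=0.01, steps=10):
    """Illustrative L-infinity PGD attack on a toy logistic-regression
    model p = sigmoid(w . x), with label y in {0, 1}.

    eps   -- perturbation budget (cf. the 0.015-0.070 thresholds above)
    alpha -- per-step size
    steps -- number of gradient-sign steps
    """
    x_adv = x.copy()
    for _ in range(steps):
        # forward pass: predicted probability of class 1
        p = 1.0 / (1.0 + np.exp(-np.dot(w, x_adv)))
        # gradient of the cross-entropy loss w.r.t. the input x_adv
        grad = (p - y) * w
        # ascend the loss along the gradient sign
        x_adv = x_adv + alpha * np.sign(grad)
        # project back into the eps-ball around the clean input x
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

In an adversarial-training loop such as the one described in the abstract, examples produced this way would be mixed into the training set alongside the clean samples.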
